2,215 research outputs found

    Asymptotic Laplacian-Energy-Like Invariant of Lattices

    Let $\mu_1\ge\mu_2\ge\cdots\ge\mu_n$ denote the Laplacian eigenvalues of a graph $G$ with $n$ vertices. The Laplacian-energy-like invariant, defined as $LEL(G)=\sum_{i=1}^{n-1}\sqrt{\mu_i}$, is a novel topological index. In this paper, we show that the Laplacian-energy-like invariant per vertex of various lattices is independent of the toroidal, cylindrical, and free boundary conditions. At the same time, explicit asymptotic values of the Laplacian-energy-like invariant of these lattices are obtained. Moreover, our approach implies that, in general, the Laplacian-energy-like invariant per vertex of other lattices is also independent of the boundary conditions. Comment: 6 pages, 2 figures
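
    As a quick numeric illustration (a minimal sketch, not code from the paper; it assumes NumPy, a connected graph, and an illustrative example graph), $LEL(G)$ can be computed directly from the Laplacian spectrum:

```python
import numpy as np

def lel(adjacency):
    """Laplacian-energy-like invariant: the sum of square roots of the
    n-1 largest Laplacian eigenvalues (the smallest eigenvalue is 0
    for a connected graph and is excluded)."""
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    mu = np.sort(np.linalg.eigvalsh(laplacian))[::-1]  # mu_1 >= ... >= mu_n
    return float(np.sqrt(mu[:-1]).sum())

# Example: the 4-cycle C_4 has Laplacian eigenvalues 4, 2, 2, 0,
# so LEL(C_4) = 2 + 2*sqrt(2) ≈ 4.828.
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
print(lel(C4))
```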

    HICC: an entropy splitting-based framework for hierarchical co-clustering

    Abstract Two-dimensional contingency tables, or co-occurrence matrices, arise frequently in important applications such as text analysis and web-log mining. As a fundamental research topic, co-clustering aims to generate a meaningful partition of the contingency table that reveals hidden relationships between rows and columns. Traditional co-clustering algorithms usually produce a predefined number of flat partitions of both rows and columns, which do not reveal relationships among clusters. To address this limitation, hierarchical co-clustering algorithms have attracted substantial research interest recently. Although successful in various applications, existing hierarchical co-clustering algorithms are usually based on heuristics and lack a solid theoretical foundation. In this paper, we present a new co-clustering algorithm, HICC, with a solid theoretical foundation. It simultaneously constructs a hierarchical structure of both row and column clusters that retains sufficient mutual information between the rows and columns of the contingency table. An efficient and effective greedy algorithm is developed, which grows a co-cluster hierarchy by successively performing the row-wise or column-wise split that yields the maximal mutual information gain. Extensive experiments on both synthetic and real datasets demonstrate that our algorithm reveals essential relationships among row (and column) clusters and achieves better clustering precision than existing algorithms. Moreover, the experiments on real datasets show that HICC can effectively reveal hidden relationships between rows and columns of the contingency table.
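
    As a rough illustration of the greedy split criterion described above (a minimal sketch under assumptions, not the authors' implementation; the function names and toy data are hypothetical), a candidate row-cluster split can be scored by the gain in mutual information between the row-cluster and column-cluster variables of the aggregated contingency table:

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (in nats) of a joint distribution given as a
    nonnegative matrix that sums to 1."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))

def cocluster_mi(counts, row_labels, col_labels):
    """MI between row clusters and column clusters: aggregate the raw
    contingency table into a cluster-level table and normalize it."""
    table = np.zeros((row_labels.max() + 1, col_labels.max() + 1))
    for i, r in enumerate(row_labels):
        for j, c in enumerate(col_labels):
            table[r, c] += counts[i, j]
    return mutual_information(table / table.sum())

def split_gain(counts, row_labels, col_labels, members_to_move):
    """MI gain from splitting a row cluster: move the rows listed in
    `members_to_move` into a brand-new row cluster."""
    new_labels = row_labels.copy()
    new_labels[members_to_move] = row_labels.max() + 1
    return (cocluster_mi(counts, new_labels, col_labels)
            - cocluster_mi(counts, row_labels, col_labels))

# Toy 4x4 count matrix with block structure: splitting rows {2, 3} away
# from rows {0, 1} increases the mutual information, so a greedy
# hierarchy-growing step would pick this split.
counts = np.array([[5, 5, 0, 0],
                   [5, 5, 0, 0],
                   [0, 0, 5, 5],
                   [0, 0, 5, 5]], dtype=float)
rows = np.array([0, 0, 0, 0])   # all rows start in a single cluster
cols = np.array([0, 0, 1, 1])   # two column clusters
print(split_gain(counts, rows, cols, members_to_move=[2, 3]))  # ≈ log(2)
```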